This interesting article (Friese, 2025) proposes a methodological shift for qualitative data analysis (QDA) that moves beyond traditional coding by introducing Conversational Analysis with AI (CAAI). The approach can be realised using Dr. Friese's own software, QInsights, which replaces the process of coding -- segmenting and labelling data -- with a structured, dialogic interaction between the researcher and a large language model (LLM).
The Analytical Process
The method uses a five-step process that focuses on synthesis instead of coding.
- Step 1: Get to know the data. The researcher uses the AI to create summaries and initial themes to identify key topics for exploration.
- Step 2: Prepare for analysis. The researcher picks a topic from the previous step and writes a set of questions to guide the AI conversation. This question list replaces a traditional coding frame and makes the analysis transparent. CAAI analysis proceeds topic by topic, in contrast with conventional coding frame development.
- Step 3: Ask questions. Using the prepared questions, the researcher has a dialogue with the AI about a small subset of the data (e.g., 4-6 interviews). This helps the researcher find patterns and explore surprising findings.
- Step 4: Synthesise insights. The researcher slows down, reads the text of the conversation and writes a synthesis of the findings. This can be done alone or collaboratively with the AI. Steps 3 and 4 are repeated for each topic.
- Step 5: Elevate the analysis (Optional). The researcher can use the AI to help connect findings to broader theories.
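The topic-by-topic loop of Steps 2-4 can be sketched in code. This is my own illustrative rendering, not QInsights code: `ask_llm` is a hypothetical stand-in for any chat-model call, stubbed here so the sketch runs as-is.

```python
# Illustrative sketch of the CAAI topic-by-topic loop (Steps 2-4).
# NOT QInsights internals; `ask_llm` is a hypothetical placeholder.

def ask_llm(prompt: str, context: list[str]) -> str:
    """Stand-in for a real LLM call grounded in the supplied documents."""
    return f"[model response to {prompt!r} over {len(context)} documents]"

def caai_analysis(interviews: list[str], topics: dict) -> dict:
    """Run the dialogic analysis topic by topic."""
    subset = interviews[:6]  # Step 3 works on a small subset (e.g. 4-6 interviews)
    syntheses = {}
    for topic, questions in topics.items():  # Step 2: prepared question list per topic
        # Step 3: ask each prepared question in dialogue with the model
        dialogue = [(q, ask_llm(q, subset)) for q in questions]
        syntheses[topic] = {
            "dialogue": dialogue,  # the transparent, documented conversation
            "synthesis": "researcher-written synthesis goes here",  # Step 4: the slow, human step
        }
    return syntheses

result = caai_analysis(
    interviews=["interview 1 text", "interview 2 text"],
    topics={"trust": ["How do participants describe trust?",
                      "What undermines trust?"]},
)
print(sorted(result))  # one entry per analysed topic
```

The point the sketch makes is structural: the question list, not a coding frame, is the reusable analytic instrument, and the human synthesis step sits outside the loop over questions.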
How LLMs are Characterized
The author views Large Language Models (LLMs) not as intelligent beings but as useful analytical partners. Their value comes from being trained on vast amounts of human-written, "socially-situated corpora". The article states that models lack true human understanding or lived experience, a concept referred to as "Seinsverbundenheit". However, their outputs are still insightful because they reflect the patterns and "lifeworlds" present in their training data. The AI's knowledge is described as a "collective and distributed" echo of human meaning-making. In this role, the LLM acts not only in a deductive and inductive fashion but also as an "abductive catalyst" -- a tool that surfaces unexpected connections and provokes new ideas for the researcher, without needing to be intelligent itself.
This approach reframes the researcher's role from a coder to a conductor of an analytic conversation, prioritizing interpretation and synthesis over mechanical categorization. Following Krähnke et al. (2025), Friese characterises this process as hermeneutical -- an approach in which meaning emerges through the researcher engaging in an open interaction with a text, in this case with the addition of a third party, the AI, which provokes and questions the process. "Meaning does not reside in the data itself but is generated by a recursive relationship between the analyst, the context, and the text" (p. 6).
Rigor and trustworthiness are established not through coding frames or inter-coder agreement, but through the transparency of the documented dialogue, traceability to source data via retrieval-augmented generation (RAG) systems (provision of quotes), and the somewhat replicable nature of the question sets that guide the inquiry. Ultimately, CAAI presents a post-coding paradigm where analysis emerges directly from a dynamic and reflexive engagement with the data, mediated by AI.
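The traceability claim rests on the model returning verbatim quotes that can be checked against the source data. A minimal sketch of that checking idea, under my own assumptions (not QInsights internals, and a much-simplified view of a real RAG pipeline), looks like this:

```python
# Minimal sketch of quote traceability: every passage the model presents
# as a verbatim quote should be locatable in a source transcript.
# My illustration only; real RAG systems retrieve and cite far more robustly.

import re

def extract_quotes(answer: str) -> list:
    """Pull out passages the model presents as verbatim quotes."""
    return re.findall(r'"([^"]+)"', answer)

def trace_quotes(answer: str, sources: dict) -> dict:
    """Map each quote to the source document that contains it, or None."""
    traced = {}
    for quote in extract_quotes(answer):
        traced[quote] = next(
            (doc_id for doc_id, text in sources.items() if quote in text),
            None,  # an ungrounded quote is a red flag for hallucination
        )
    return traced

sources = {"interview_01": "I stopped trusting the committee after the audit."}
answer = ('One participant said "I stopped trusting the committee after '
          'the audit." when asked about governance.')
print(trace_quotes(answer, sources))
```

A quote that maps to `None` cannot be grounded in the data, which is exactly the failure mode the documented-dialogue approach is meant to make visible.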
In the course of the article, Friese revisits the text analysis she conducted for her own PhD and candidly concludes that she could have done better, and much faster, using the method she proposes.
Strengths
Reflection on use of LLMs
Firstly, I'd congratulate Dr. Friese for presenting a deeply argued explanation of and justification for the QInsights workflow. As researchers, however we use LLMs, it is crucial that we continue to reflect on what we are doing, rather than letting the muscles of our critical thought go weak through lack of exercise.
LLM-supported QDA as hermeneutics
Secondly, the advent of generative AI is a very good time to ask again: do we really need coding to make QDA rigorous? As she says: "qualitative research is entering a moment of methodological experimentation". After all, coding alone was never enough to make text analysis rigorous, not least because coding can ignore context and the positionality of the researcher and the research task. Re-positioning LLM-supported QDA in the broader context of hermeneutics is useful and refreshing.
Dangers of consumer-facing LLMs
Thirdly, she reminds researchers that methodological rigour is very hard to achieve with unreflective use of consumer-facing LLMs like ChatGPT, not least because they will skim-read the text to find a quick answer and will too often grab a possibly incorrect answer from their training data rather than ground it in the actual text. What's more, a platform like QInsights can provide a completely documented workflow, where every conversation and decision made is recorded -- although we have no insight into the AI's 'thought processes' behind each of its contributions to this workflow beyond what it tells us.
A new kind of epistemic actor between human and machine
I usually find arguments about whether generative AIs are "really" conscious or can "really" understand something simply spurious, and no more interesting than arguing about whether a computer can really "copy" or "save" or "read" or even "predict" something. However, Friese has a really interesting angle on this, an angle which takes us further rather than trying to police useful language. She 'treats AI as a new kind of epistemic actor — neither a mere stochastic parrot nor a conscious knower, but as something else: a dialogic partner capable of generating insight through probabilistic modelling trained on socially situated corpora. “Construction” in CAAI is not solely human; it is distributed, emerging through interaction between human interpretation, data context, and machine-generated associations.' So an AI is more than "an assistant": it is an assistant which possesses, or is, a whole world of interconnected meanings.
Some caveats
Against those very notable strengths, here are a couple of caveats.
How new is this? If it is new, is that really because of the inclusion of AI?
So, we can drop coding and use an AI as a sort of meta-assistant to co-create meaning from text. My biggest question is: to the extent that the AI is not that different from a tireless, lightning-fast and knowledgeable human assistant, couldn't we drop the AI from the methodological equation and just say, here's one newish way to analyse texts hermeneutically? (Though you'd have to have plenty of time and either one very good human assistant or a team of normal ones for this to actually work.)
Is this new method really just a speeded-up version of what we could in theory have done with very fast human assistants? Or does the sheer magnitude of that speed-up mean a kind of Hegelian transformation of quantity into quality: it's just so much faster that the result is qualitatively different? Or is there something else about the method which is fundamentally new?
Skipping the coding step does lose some reproducibility -- does the AI really change this?
If the addition of AI support does not make this way of working fundamentally new, I don't really see how it gives us a free pass on all the original reasons for using coding in the first place. This method is presumably, if everything else is held equal, less reproducible than a method which does employ coding.
Dr. Friese does mention "Reapplication of refined questions across subgroups; independent synthesis by multiple researchers; replication over time" as possible ways to assess reproducibility.
This is where our approach to causal coding at Causal Map differs most strongly from CAAI: we use the AI only as a low-level assistant with narrowly defined tasks, which makes the workflow less dependent on the stochastic nature of the AI's behaviour, and we confine human input to the most crucial high-level decisions, so the whole process is easier to reproduce.
Why this particular set of steps?
It isn't hard, at least in theory, to think of endless variations of the four or five steps set out in the article -- for example, the team-based approach suggested by Krähnke et al. (2025) -- so what are the specific arguments for this particular set of steps? Why, for example, do we work topic by topic? I'm sure it's not the case that "anything goes", but why not?
Whose world?
I am sure Dr. Friese would acknowledge that the world of "machine-generated associations" underpinning AI responses is not just given but is constructed in a very specific way by very specific organisations on a very specific set of training data. This fact has been repeated almost to exhaustion in recent writings on AI, but she does elevate this world to a kind of new meta-assistant, so it would be good to reflect once more on its make-up and provenance. You could see the meta-assistant as a combination of all human lifeworlds, except that of course it isn't -- because many people, and certain continents, and whole swathes of actual human non-digital life are not equally included.
In conclusion
As Dr. Friese says, generative AI potentially opens up a whole world of possibilities for qualitative text analysis and indeed for social science in general. We need to be acutely aware of the multiple social, political, methodological and environmental risks of this technology, and at the same time not miss out on its benefits.
And, try out QInsights!
References
Friese (2025). Conversational Analysis with AI - CA to the Power of AI: Rethinking Coding in Qualitative Analysis. https://doi.org/10.2139/ssrn.5232579.
Krähnke, Pehl, & Dresing (2025). Hybride Interpretation textbasierter Daten mit dialogisch integrierten LLMs: Zur Nutzung generativer KI in der qualitativen Forschung. https://www.ssoar.info/ssoar/handle/document/99389.